Learning of Visual Modules from Examples: A Framework for
Understanding Adaptive Visual Performance
Tomaso Poggio, Shimon Edelman and Manfred Fahle
Networks that solve specific visual tasks, such as the evaluation of spatial
relations with hyperacuity precision, can be easily synthesized from a small
set of examples. The present paper describes a series of simulated psychophysical
experiments that replicate human performance in hyperacuity tasks. The experiments
were conducted with a detailed computational model of perceptual learning,
based on HyperBF interpolation. The success of the simulations provides
a new angle on the purposive aspect of human vision, in which the capability
for solving any given task emerges only if the need for it is dictated by
the environment. We conjecture that almost any tractable psychophysical
task can be performed better after suitable training, provided the necessary
information is available in the stimulus. © 1992 Academic Press, Inc.
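
The sketch below is not the authors' code; it is a minimal illustration of the idea named in the abstract, using the simplest special case of HyperBF interpolation, a Gaussian radial basis function network with one unit centred on each training example. The hyperacuity-style task (mapping a small stimulus feature vector to a left/right judgement), the feature dimensionality, and the width parameter sigma are all hypothetical choices made for the example.

```python
# Minimal Gaussian RBF interpolation sketch (special case of HyperBF):
# one basis unit per training example, coefficients solved exactly.
import numpy as np

def gaussian(r, sigma=1.0):
    """Radial basis function G(r) = exp(-r^2 / (2 sigma^2))."""
    return np.exp(-(r ** 2) / (2.0 * sigma ** 2))

def train_rbf(X, y, sigma=1.0):
    """Solve G c = y with centres placed at the training examples X."""
    # Pairwise distances between examples form the interpolation matrix.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = gaussian(d, sigma)
    return np.linalg.solve(G, y)       # exact interpolation of the examples

def predict_rbf(X_train, c, X_new, sigma=1.0):
    """f(x) = sum_i c_i G(||x - t_i||), centres t_i = training examples."""
    d = np.linalg.norm(X_new[:, None, :] - X_train[None, :, :], axis=-1)
    return gaussian(d, sigma) @ c

# Toy usage: a handful of labelled stimuli suffices to synthesize the module.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))           # 10 example stimuli, 4 features each
y = np.sign(X[:, 0])                   # hypothetical left/right label
c = train_rbf(X, y, sigma=2.0)
print(predict_rbf(X, c, X[:3], sigma=2.0), y[:3])
```

The design choice illustrated is the one the abstract emphasizes: the "module" is synthesized directly from a small example set, with no task-specific machinery beyond the choice of basis function.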